44 research outputs found

    Preliminary analysis of the effects of confirmation bias on software defect density

    In cognitive psychology, confirmation bias is defined as the tendency of people to verify hypotheses rather than refute them. During unit testing, software developers should aim to make their code fail. However, due to confirmation bias, most defects might be overlooked, leading to an increase in software defect density. In this research, we empirically analyze the effect of software developers' confirmation bias on software defect density.

    Rediscovery Datasets: Connecting Duplicate Reports

    The same defect can be rediscovered by multiple clients, causing unplanned outages and leading to reduced customer satisfaction. In the case of popular open source software, a high volume of defects is reported on a regular basis, and a large number of these reports are actually duplicates/rediscoveries of each other. Researchers have analyzed the factors related to the content of duplicate defect reports in the past. However, some other potentially important factors, such as the inter-relationships among duplicate defect reports, are not readily available in defect tracking systems such as Bugzilla. This information may speed up bug fixing, enable efficient triaging, improve customer profiles, etc. In this paper, we present three defect rediscovery datasets mined from Bugzilla. The datasets capture data for three groups of open source software projects: Apache, Eclipse, and KDE. They contain information about approximately 914 thousand defect reports over a period of 18 years (1999-2017) and capture the inter-relationships among duplicate defects. We believe that sharing these data with the community will help researchers and practitioners better understand the nature of defect rediscovery and enhance the analysis of defect reports.
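    A minimal sketch of how such inter-relationships might be explored, assuming the duplicate links can be exported as (report_id, duplicate_of) pairs; the file name and column names below are illustrative, not the datasets' actual schema:

```python
# Hypothetical sketch: group duplicate defect reports into rediscovery
# groups with a union-find over assumed (report_id, duplicate_of) pairs.
import csv
from collections import defaultdict

parent = {}

def find(x):
    # Find the group representative, with path halving.
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(a, b):
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb

# "apache_duplicates.csv" is an assumed export, not a file shipped with the datasets.
with open("apache_duplicates.csv", newline="") as f:
    for row in csv.DictReader(f):
        union(row["report_id"], row["duplicate_of"])

groups = defaultdict(list)
for report in parent:
    groups[find(report)].append(report)

# The size of a group indicates how often the same defect was rediscovered.
sizes = sorted((len(g) for g in groups.values()), reverse=True)
print("largest rediscovery groups:", sizes[:10])
```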

    Empirical analyses of the factors affecting confirmation bias and the effects of confirmation bias on software developer/tester performance

    Background: During all levels of software testing, the goal should be to make the code fail. However, software developers and testers are more likely to choose positive tests rather than negative ones due to a phenomenon called confirmation bias. Confirmation bias is defined as the tendency of people to verify their hypotheses rather than refute them. In the literature, there are theories about the possible effects of confirmation bias on software development and testing. Due to the tendency towards positive tests, most of the software defects remain undetected, which in turn leads to an increase in software defect density. Aims: In this study, we analyze factors affecting confirmation bias in order to discover methods to circumvent it. The factors we investigate are experience in software development/testing and reasoning skills that can be gained through education. In addition, we analyze the effect of confirmation bias on software developer and tester performance. Method: In order to measure and quantify the confirmation bias levels of software developers/testers, we prepared pen-and-paper and interactive tests based on two tasks from the cognitive psychology literature. These tests were administered to 36 employees of a large-scale telecommunication company in Europe as well as 28 graduate computer engineering students of Bogazici University, resulting in a total of 64 subjects. We evaluated the outcomes of these tests using the metrics we proposed, in addition to some basic methods inherited from the cognitive psychology literature. Results: The results showed that, regardless of experience in software development/testing, abilities such as logical reasoning and strategic hypothesis testing are the differentiating factors for low confirmation bias levels. Moreover, the analysis of the relationship between code defect density and the confirmation bias levels of software developers and testers showed a direct correlation between confirmation bias and the defect proneness of the code. Conclusions: Our findings show that strong logical reasoning and hypothesis testing skills are differentiating factors in software developer/tester performance in terms of defect rates. We recommend that companies focus on improving the logical reasoning and hypothesis testing skills of their employees through training programs. As future work, we plan to replicate this study in other software development companies. Moreover, we will use confirmation bias metrics in addition to product and process metrics for software defect prediction. We believe that confirmation bias metrics will improve the prediction performance of the learning-based defect prediction models we have been building for over a decade.
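    As a rough illustration of the kind of correlation analysis described above (not the study's actual procedure or data), one could compute a rank correlation between per-developer confirmation bias scores and the defect density of the code they produced; all values below are invented placeholders:

```python
# Hypothetical sketch: rank correlation between confirmation bias scores
# and defect density (defects per KLOC). The numbers are placeholders.
from scipy.stats import spearmanr

bias_scores    = [0.62, 0.48, 0.71, 0.35, 0.80, 0.55]   # assumed bias metric per developer
defect_density = [12.1,  8.4, 14.3,  5.2, 17.0,  9.8]   # assumed defects per KLOC

rho, p_value = spearmanr(bias_scores, defect_density)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```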

    Modeling Human Aspects to Enhance Software Quality Management

    The aim of this research is to explore the impact of cognitive biases and social networks on testing and developing software. The research addresses two critical areas: i) predicting the defective parts of the software, and ii) determining the right person to test those defective parts. Every phase in software development requires analytical problem solving skills, and using everyday heuristics instead of the laws of logic and mathematics may affect the quality of the software product in an undesirable manner. The proposed research aims to understand how the mind works in solving problems. People also work in teams in software development, so their social interactions while solving a problem may affect the quality of the product. The proposed research therefore also aims to model the social network structure of testers and developers to understand its impact on software quality and defect prediction performance.
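    One purely illustrative way to represent such a developer/tester social network is as a graph whose edges connect people who worked on the same defect report; the edge list and the use of degree centrality below are assumptions rather than the proposal's prescribed model:

```python
# Hypothetical sketch: a developer/tester interaction graph built with networkx.
import networkx as nx

# Assumed interaction pairs (people who touched the same defect report).
interactions = [
    ("dev_a", "tester_x"), ("dev_a", "tester_y"),
    ("dev_b", "tester_x"), ("dev_c", "tester_y"),
]

g = nx.Graph()
g.add_edges_from(interactions)

# Degree centrality as one candidate feature for a defect prediction model.
for person, centrality in sorted(nx.degree_centrality(g).items()):
    print(person, round(centrality, 2))
```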

    An analysis of the effects of company culture, education and experience on confirmation bias levels of software developers and testers

    In this paper, we present a preliminary analysis of the effects of factors such as company culture, education and experience on the confirmation bias levels of software developers and testers. Confirmation bias is defined as the tendency of people to verify their hypotheses rather than refute them, and thus it has an effect on all software testing.

    Towards a Metric Suite Proposal to Quantify Confirmation Biases of Developers

    The goal of software metrics is the identification and measurement of the essential parameters that affect software development. Metrics can be used to improve software quality and productivity. Existing metrics in the literature are mostly product or process related. However, the thought processes of people have a significant impact on software quality, since software is designed, implemented and tested by people. Therefore, in defining new metrics, we need to take human cognitive aspects into account. Our research aims to address this need through the proposal of a new metric scheme to quantify a specific human cognitive aspect, namely "confirmation bias". In our previous research, we defined a methodology to measure and quantify the confirmation biases of individuals. In this research, we propose a metric suite that can be used by practitioners during daily decision making. Our proposed metric set consists of six metrics with a theoretical basis in cognitive psychology and measurement theory. Empirical samples of these metrics were collected from two software companies specialized in two different domains in order to demonstrate their feasibility. We suggest ways in which practitioners may use these metrics to improve the software development process.
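    The abstract does not enumerate the six metrics, so the sketch below shows only one generic, assumed example of a confirmation-bias-style measure: the fraction of a developer's test cases that merely confirm the hypothesized behaviour instead of trying to refute it:

```python
# Hypothetical sketch: a single illustrative bias-style ratio, not one of
# the paper's six metrics.
def confirmatory_ratio(test_labels):
    """Fraction of tests labelled 'positive' (confirming the hypothesis)."""
    labels = list(test_labels)
    if not labels:
        return 0.0
    return labels.count("positive") / len(labels)

# Example: a tester who wrote 8 positive and 2 negative tests.
session = ["positive"] * 8 + ["negative"] * 2
print(f"confirmatory ratio: {confirmatory_ratio(session):.2f}")  # 0.80
```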